6 research outputs found

    Decisional enhancement and autonomy: public attitudes towards overt and covert nudges

    Ubiquitous cognitive biases hinder optimal decision making. Recent calls to assist decision makers in mitigating these biases—via interventions commonly called “nudges”—have been criticized as infringing upon individual autonomy. We tested the hypothesis that such “decisional enhancement” programs that target overt decision making—i.e., conscious, higher-order cognitive processes—would be more acceptable than similar programs that affect covert decision making—i.e., subconscious, lower-order processes. We presented respondents with vignettes in which they chose between an option that included a decisional enhancement program and a neutral option. To assess preferences for overt versus covert decisional enhancement, we used the contrastive vignette technique, in which different groups of respondents were presented with one of a pair of vignettes targeting either conscious or subconscious processes. Other than the nature of the decisional enhancement, the vignettes were identical, allowing us to isolate the influence of the type of decisional enhancement on preferences. Overall, we found support for the hypothesis that people prefer conscious decisional enhancement. Further, respondents who perceived the influence of the program as more conscious than subconscious reported that their decisions under the program would be more “authentic”. However, this relative favorability was somewhat contingent upon context. We discuss our results with respect to the implementation and ethics of decisional enhancement.

    Cyborg Consumers: When Human Enhancement Technologies Are Dehumanizing

    No full text
    New technologies are providing unprecedented opportunities for consumers to enhance their bodies and minds, including traits typically seen as comprising "humanness." We show that such enhancements can be dehumanizing, and explore how the perceived naturalness of the means and outcome of enhancement can explain this technological dehumanization.

    Understanding and Improving Consumer Reactions to Service Bots

    No full text
    Many firms are beginning to replace customer service employees with bots, from humanoid service robots to digital chatbots. Using real human-bot interactions in lab and field settings, we study consumers’ evaluations of bot-provided service. We find that service evaluations are more negative when the service provider is a bot rather than a human—even when the provided service is identical. This effect is explained by consumers’ belief that service automation is motivated by firm benefits (i.e., cutting costs) at the expense of customer benefits (such as service quality). The effect is eliminated when firms share the economic surplus derived from automation with consumers through price discounts. The effect is reversed when service bots provide unambiguously superior service to human employees—a scenario that may soon become reality. Consumers’ default reactions to service bots are therefore largely negative, but can be equal to or better than reactions to human service providers if firms demonstrate how automation benefits consumers.